
B-1: Radio Access Networks

Conference: 11:00 AM — 12:30 PM PDT
Local: May 21 Tue, 2:00 PM — 3:30 PM EDT
Location: Regency B

Det-RAN: Data-Driven Cross-Layer Real-Time Attack Detection in 5G Open RANs

Alessio Scalingi (IMDEA Networks, Spain); Salvatore D'Oro, Francesco Restuccia and Tommaso Melodia (Northeastern University, USA); Domenico Giustiniano (IMDEA Networks Institute, Spain)

Fifth generation (5G) and beyond cellular networks are vulnerable to security threats, mainly due to the lack of integrity protection in the Radio Resource Control (RRC) layer. To address this problem, we propose a real-time anomaly detection framework that builds on the concept of distributed applications in 5G Open RAN networks. Specifically, we analyze spectrum-level characteristics to infer, in a novel way, the time of arrival of uplink packets that lack integrity protection. We then identify legitimate message sources and detect suspicious activity through an Artificial Intelligence (AI) design at the edge that handles cross-layer data, demonstrating that Open RAN-based applications can be designed to provide additional security to the network. Our solution is first validated in extensive emulation environments, achieving over 85% accuracy in predicting potential attacks on unseen test scenarios. We then integrate our approach into a real-world prototype with a large channel emulator to assess its real-time performance and costs, meeting a 2 ms real-time latency constraint. This makes our solution suitable for real-world deployments.
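
To make the detection idea concrete, here is a minimal Python sketch of one plausible ingredient: flagging unprotected uplink messages whose inferred time of arrival (ToA) contradicts the claimed sender's history, since a spoofer at a different location exhibits a different propagation delay. This is a hypothetical illustration, not Det-RAN's actual pipeline; the window, threshold, and scoring rule are assumptions.

    import numpy as np

    def toa_anomaly_scores(toa: np.ndarray, window: int = 16) -> np.ndarray:
        """Robust z-score of each ToA sample against a sliding-window median.
        The first `window` samples are treated as a warm-up and score zero."""
        scores = np.zeros_like(toa)
        for i in range(window, len(toa)):
            hist = toa[i - window:i]
            med = np.median(hist)
            mad = np.median(np.abs(hist - med)) + 1e-12  # robust spread
            scores[i] = abs(toa[i] - med) / mad
        return scores

    # Messages scoring above a tuned threshold are handed to the edge AI stage.
    rng = np.random.default_rng(1)
    toa = rng.normal(3.3e-6, 5e-9, 100)   # legitimate UE, ~1 km propagation delay
    toa[60] += 2e-6                       # injected message from another location
    suspicious = toa_anomaly_scores(toa) > 8.0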
Speaker
Speaker biography is not available.

Providing UE-level QoS Support by Joint Scheduling and Orchestration for 5G vRAN

Jiamei Lv, Yi Gao, Zhi Ding, Yuxiang Lin and Xinyun You (Zhejiang University, China); Guang Yang (Alibaba Group, China); Wei Dong (Zhejiang University, China)

Virtualized radio access networks (vRANs) enable network operators to run RAN functions on commodity servers instead of proprietary hardware. They have garnered significant interest for their ability to reduce costs, provide deployment flexibility, and offer other benefits, particularly for operators of 5G private networks. However, non-deterministic computing platforms make effective quality of service (QoS) provisioning difficult, especially when time-critical and throughput-demanding applications are deployed together. Existing approaches, including network slicing and other resource management schemes, fail to provide fine-grained and effective QoS support at the user equipment (UE) level. In this paper, we propose RT-vRAN, a UE-level QoS provisioning framework. RT-vRAN presents the first comprehensive analysis of the complicated interactions among key network parameters, e.g., network function splitting, resource block allocation, and modulation/coding scheme selection, and builds an accurate and comprehensive network model. RT-vRAN also provides a fast network configurator that produces feasible configurations in seconds, making it practical for real 5G vRANs. We implement RT-vRAN on OpenAirInterface and evaluate it in simulation and testbed-based experiments. Results show that, compared with existing works, RT-vRAN reduces the delay violation rate by 12%–41% under various network settings while minimizing total energy consumption.
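
The configuration space the abstract names (function split, resource blocks, MCS) can be illustrated with a toy search. The delay and energy models below are stand-ins we invented for illustration, not the paper's network model, and RT-vRAN's configurator is far smarter than this brute force.

    from itertools import product

    SPLITS = {"7.2": 1.0, "2": 0.6}          # hypothetical CPU load factors per split
    def delay_ms(split, rbs, mcs):            # toy stand-in for the network model
        return 8.0 * SPLITS[split] / (rbs * (mcs + 1) / 28.0)
    def energy_w(split, rbs, mcs):             # toy energy model
        return 10.0 * SPLITS[split] + 0.1 * rbs + 0.05 * mcs

    def configure(delay_budget_ms):
        """Return the lowest-energy feasible (energy, split, rbs, mcs), or None."""
        best = None
        for split, rbs, mcs in product(SPLITS, range(1, 107), range(28)):
            if delay_ms(split, rbs, mcs) <= delay_budget_ms:
                cand = (energy_w(split, rbs, mcs), split, rbs, mcs)
                if best is None or cand < best:
                    best = cand
        return best

    print(configure(2.0))    # a 2 ms per-UE delay budget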
Speaker
Speaker biography is not available.

ORANUS: Latency-tailored Orchestration via Stochastic Network Calculus in 6G O-RAN

Oscar Adamuz-Hinojosa (University of Granada, Spain); Lanfranco Zanzi (NEC Laboratories Europe, Germany); Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Xavier Costa-Perez (ICREA and i2cat & NEC Laboratories Europe, Spain)

The Open Radio Access Network (O-RAN) Alliance has introduced a new architecture to enhance the 6th generation (6G) RAN. However, existing O-RAN-compliant solutions lack crucial details for performing effective control loops at multiple time scales. In this vein, we propose ORANUS, an O-RAN-compliant mathematical framework to allocate radio resources to multiple ultra-Reliable Low-Latency Communication (uRLLC) services at different time scales. In the near-RT control loop, ORANUS relies on a novel Stochastic Network Calculus (SNC)-based model to compute the amount of guaranteed radio resources for each uRLLC service. Unlike traditional approaches such as queueing theory, the SNC-based model allows ORANUS to ensure that the probability that the packet transmission delay exceeds a budget, i.e., the violation probability, stays below a target tolerance. ORANUS also utilizes an RT control loop to monitor service transmission queues, dynamically adjusting the guaranteed radio resources when traffic anomalies are detected. To the best of our knowledge, ORANUS is the first O-RAN-compliant solution that leverages SNC to carry out near-RT and RT control loops. Simulation results show that ORANUS significantly improves over reference solutions, with an average violation probability 10x lower.
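
A hedged sketch of the SNC flavor of this dimensioning, under assumptions of ours rather than ORANUS's exact model: for an arrival process with an MGF envelope (sigma, rho) at parameter theta and a server draining at constant rate C, a standard SNC delay bound has the form P(delay > d) <= exp(theta*sigma) * exp(-theta*(C - rho)*d), so the smallest guaranteed rate meeting a tolerance can be found by search.

    import math

    def violation_bound(C, d, sigma, rho, theta):
        if C <= rho:
            return 1.0                      # no stability, the bound is vacuous
        return min(1.0, math.exp(theta * sigma - theta * (C - rho) * d))

    def min_guaranteed_rate(d, tol, sigma, rho, theta, step=0.05):
        """Smallest service rate C with violation_bound <= tol (coarse search;
        a bisection would be used in practice)."""
        C = rho + step
        while violation_bound(C, d, sigma, rho, theta) > tol:
            C += step
            if C > 1e6:
                return None                 # infeasible for this theta
        return C

    # Abstract units: delay budget d=1, tolerance 1e-5, burstiness sigma=2.
    print(min_guaranteed_rate(d=1.0, tol=1e-5, sigma=2.0, rho=5.0, theta=1.0))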
Speaker
Speaker biography is not available.

OREO: O-RAN intElligence Orchestration of xApp-based network services

Federico Mungari and Corrado Puligheddu (Politecnico di Torino, Italy); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

The Open Radio Access Network (O-RAN) architecture aims to support a plethora of network services, such as beam management and network slicing, through third-party applications called xApps. To provide network services efficiently at the radio interface, it is thus essential that the deployment of xApps be carefully orchestrated. In this paper, we introduce OREO, an O-RAN xApp orchestrator designed to maximize the number of offered services. OREO's key idea is that services can share xApps whenever they correspond to semantically equivalent functions and the xApp output is of sufficient quality to fulfill the service requirements. By leveraging a multi-layer graph model that captures all the system components, from services to xApps, OREO implements an algorithmic solution that selects the best service configuration, maximizes the number of shared xApps, and efficiently and dynamically allocates resources to them. Numerical results, as well as experimental tests performed with our proof-of-concept implementation, demonstrate that OREO closely matches the optimum obtained by solving an NP-hard problem. Further, it outperforms the state of the art, deploying up to 35% more services with, on average, 28% fewer xApps and a corresponding reduction in resource consumption.
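
The sharing idea can be shown in a few lines. This toy greedy, with names and admission policy assumed by us (not OREO's actual graph algorithm), instantiates one xApp per function type so that admitting a service only costs the xApps it does not yet share.

    def deploy(services, xapp_capacity):
        """services: dict name -> set of required function types.
        Greedy admission, cheapest services (fewest functions) first."""
        deployed, xapps = [], set()
        for name, funcs in sorted(services.items(), key=lambda s: len(s[1])):
            new = funcs - xapps              # only these must be instantiated
            if len(xapps) + len(new) <= xapp_capacity:
                xapps |= new
                deployed.append(name)
        return deployed, xapps

    svcs = {"slicing": {"kpi-mon", "sched-ctrl"},
            "beam-mgmt": {"kpi-mon", "beam-opt"},
            "energy-save": {"kpi-mon", "sched-ctrl"}}
    # Three services fit in a budget of three xApps thanks to sharing.
    print(deploy(svcs, xapp_capacity=3))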
Speaker
Speaker biography is not available.

Session Chair

Ning Lu (Queen's University, Canada)


B-2: MIMO and Beamforming

Conference: 2:00 PM — 3:30 PM PDT
Local: May 21 Tue, 5:00 PM — 6:30 PM EDT
Location: Regency B

NOMA-Enhanced Quantized Uplink Multi-user MIMO Communications

Thanh Phung Truong, Anh-Tien Tran and Van Dat Tuong (Chung-Ang University, Korea (South)); Nhu-Ngoc Dao (Sejong University, Korea (South)); Sungrae Cho (Chung-Ang University, Korea (South))

This research examines quantized uplink multi-user MIMO communication systems with low-resolution quantizers at the users and the base station (BS). In such a system, we employ the non-orthogonal multiple access (NOMA) technique for communication between the users and the BS to enhance communication performance. To maximize the number of users that satisfy the quality of service (QoS) requirement while minimizing the users' transmit power, we jointly optimize the transmit power and precoding matrices at the users and the digital beamforming matrix at the BS. Owing to the non-convexity of the objective function, we recast the problem as a reinforcement learning task and propose a deep reinforcement learning (DRL) framework named QNOMA-DRLPA to solve it. Because the actions produced by the DRL algorithm may not satisfy the problem constraints, we propose a post-actor process that reshapes the actions to meet all constraints. In simulations, we assess the proposed framework's training convergence and demonstrate its superior performance under various environmental parameters compared with benchmark schemes.
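
One plausible form of such a post-actor repair step, sketched under our own assumptions (the abstract does not specify the projection), is to map raw actor outputs onto the feasible set by rescaling any user's precoder that violates its power budget:

    import numpy as np

    def post_actor(raw: np.ndarray, p_max: float) -> np.ndarray:
        """raw: (n_users, dim) unconstrained actor outputs, read as per-user
        precoding vectors. Rows with power ||w||^2 > p_max are scaled back
        onto the constraint boundary; feasible rows pass through unchanged."""
        out = raw.copy()
        power = np.sum(out ** 2, axis=1)            # per-user transmit power
        over = power > p_max
        out[over] *= np.sqrt(p_max / power[over])[:, None]
        return out

    w = post_actor(np.random.randn(4, 8), p_max=1.0)
    assert np.all(np.sum(w ** 2, axis=1) <= 1.0 + 1e-9)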
Speaker
Speaker biography is not available.

A Learning-only Method for Multi-Cell Multi-User MIMO Sum Rate Maximization

Qingyu Song (The Chinese University of Hong Kong, Hong Kong); Juncheng Wang (Hong Kong Baptist University, Hong Kong); Jingzong Li (City University of Hong Kong, Hong Kong); Guochen Liu (Huawei Noah's Ark Lab, China); Hong Xu (The Chinese University of Hong Kong, Hong Kong)

Solving the sum rate maximization problem for interference reduction in multi-cell multi-user multiple-input multiple-output (MIMO) wireless communication systems has been investigated for a decade. Several machine learning-assisted methods have been proposed within conventional sum rate maximization frameworks, such as the Weighted Minimum Mean Square Error (WMMSE) framework. However, existing learning-assisted methods are hard to parallelize, and their performance is intrinsically bounded by WMMSE. In contrast, we propose a structural learning-only framework derived from an abstraction of WMMSE. Our framework increases the solvability of the original MIMO sum rate maximization problem through dimension expansion: a unitary learnable parameter matrix lifts it into an equivalent problem in a higher dimension. We then propose a structural solution updating method for the higher-dimensional problem, using neural networks to generate the learnable matrix-multiplication parameters. The structural solution updating method achieves lower complexity than WMMSE thanks to its parallel implementation. Simulation results under practical communication network settings demonstrate that our learning-only framework achieves up to 98% optimality relative to state-of-the-art algorithms while providing up to 47× acceleration in various scenarios.
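
A loose sketch of the dimension-expansion idea as we read the abstract; in the paper the lifting matrix is a learned parameter, whereas here it is a fixed random orthonormal lift and the update step is a placeholder:

    import numpy as np

    rng = np.random.default_rng(0)
    d, D = 4, 16                                      # original / expanded dims
    Q, _ = np.linalg.qr(rng.standard_normal((D, d)))  # orthonormal columns: Q.T @ Q = I

    x = rng.standard_normal(d)      # current solution (e.g., a beamformer), toy
    z = Q @ x                       # lift to the higher-dimensional problem
    z = 0.9 * z                     # placeholder for the structural update step
    x_new = Q.T @ z                 # project back; orthonormality keeps consistency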
Speaker
Speaker biography is not available.

HoloBeam: Learning Optimal Beamforming in Far-Field Holographic Metasurface Transceivers

Debamita Ghosh and Manjesh K Hanawal (Indian Institute of Technology Bombay, India); Nikola Zlatanov (Innopolis University, Russia)

Holographic Metasurface Transceivers (HMTs) are emerging as cost-effective substitutes for large antenna arrays for beamforming in millimeter-wave and terahertz communication. However, to achieve the desired channel gains through beamforming in an HMT, the phase shifts of a large number of elements must be set appropriately, which is challenging. Moreover, these optimal phase shifts depend on the receiver's location, which may be unknown. In this work, we develop a learning algorithm based on a fixed-budget multi-armed bandit framework to beamform and maximize the received signal strength at the receiver in far-field regions. Our algorithm, named Holographic Beam (HoloBeam), exploits the parametric form of the channel gains of the beams, which can be expressed in terms of two phase-shifting parameters. Even after this parameterization, the problem remains challenging because the phase-shifting parameters take continuous values. To overcome this, HoloBeam works with discrete values of the phase-shifting parameters and exploits their unimodal relation with the channel gains to learn the optimal values faster. We upper-bound the probability that HoloBeam incorrectly identifies the (discrete) optimal parameters in terms of the number of pilots used in learning, and show that this probability decays exponentially with the number of pilot signals. Extensive simulations demonstrate that HoloBeam outperforms state-of-the-art algorithms.
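
A hedged sketch of a unimodality-exploiting fixed-budget search in the spirit the abstract describes (the elimination policy and budget split are our assumptions, not HoloBeam's analysis): a noisy ternary search over one discretized phase-shift parameter, spending the pilot budget on repeated comparisons of two interior probe points.

    import numpy as np

    def unimodal_best_arm(measure, n_arms: int, budget: int) -> int:
        """measure(k): noisy received-signal-strength sample for index k."""
        rounds = max(1, int(np.ceil(np.log(n_arms) / np.log(1.5))))
        per_probe = max(1, budget // (2 * rounds))
        lo, hi = 0, n_arms - 1
        while hi - lo > 2:
            m1, m2 = lo + (hi - lo) // 3, hi - (hi - lo) // 3
            r1 = np.mean([measure(m1) for _ in range(per_probe)])
            r2 = np.mean([measure(m2) for _ in range(per_probe)])
            if r1 < r2:
                lo = m1 + 1      # unimodality: the optimum cannot lie left of m1
            else:
                hi = m2 - 1
        return max(range(lo, hi + 1), key=measure)

    gain = lambda k: -((k - 37) / 10.0) ** 2 + np.random.normal(0, 0.05)
    print(unimodal_best_arm(gain, n_arms=128, budget=600))  # likely prints ~37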
Speaker
Speaker biography is not available.

FTP: Enabling Fast Beam-Training for Optimal mmWave Beamforming

Wei-Han Chen, Xin Liu, Kannan Srinivasan and Srinivasan Parthasarathy (The Ohio State University, USA)

To maximize the Signal-to-Noise Ratio (SNR), it is necessary to move beyond selecting beams from a codebook. While state-of-the-art approaches can significantly improve SNR over codebook-based beam selection by exploiting the globally-optimal beam, they incur significant beam-training overhead, which limits their applicability to large-scale antenna arrays and their scalability to multiple users. In this paper, we propose FTP, a highly scalable beam-training solution that finds the globally-optimal beam with minimal beam-training overhead. FTP works by estimating each path's direction along with its complex gain and synthesizing the globally-optimal beam from these parameters. Our design significantly reduces the search space for these path parameters, which enables FTP to scale to large antenna arrays. We implemented and evaluated FTP on a mmWave experimental platform with 32 antenna elements. Our results demonstrate that FTP achieves optimal SNR performance comparable to the state of the art while reducing beam-training overhead by three orders of magnitude. In simulation, we show that FTP's gains can be even more significant for larger antenna arrays with up to 1024 elements.
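
The synthesis step the abstract describes can be sketched as matched-filter (MRT) beamforming over a channel reconstructed from per-path estimates. Parameter names and the array model (half-wavelength ULA) are our assumptions; FTP's estimation stage is omitted entirely.

    import numpy as np

    def steering(theta_deg: float, n: int) -> np.ndarray:
        """ULA steering vector, half-wavelength element spacing."""
        k = np.pi * np.sin(np.deg2rad(theta_deg))
        return np.exp(1j * k * np.arange(n))

    def synthesize_beam(paths, n: int) -> np.ndarray:
        """paths: list of (angle_deg, complex_gain). Reconstruct the channel
        h = sum_l g_l * a(theta_l) and return the matched-filter beam, which
        maximizes SNR for this (reconstructed) channel."""
        h = sum(g * steering(th, n) for th, g in paths)
        w = np.conj(h)
        return w / np.linalg.norm(w)

    # Two estimated paths for a 32-element array.
    w = synthesize_beam([(12.0, 1.0), (-35.0, 0.4 * np.exp(1j * 0.7))], n=32)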
Speaker
Speaker biography is not available.

Session Chair

Joerg Widmer (IMDEA Networks Institute, Spain)


B-3: Satellite Networks

Conference: 4:00 PM — 5:30 PM PDT
Local: May 21 Tue, 7:00 PM — 8:30 PM EDT
Location: Regency B

Your Mega-Constellations Can be Slim: A Cost-Effective Approach for Constructing Survivable and Performant LEO Satellite Networks

Zeqi Lai, Yibo Wang, Hewu Li and Qian Wu (Tsinghua University, China); Qi Zhang (Zhongguancun Laboratory, China); Yunan Hou (Beijing Forestry University, China); Jun Liu and Yuanjie Li (Tsinghua University, China)

Recently we have witnessed the active deployment of satellite mega-constellations with hundreds to thousands of low earth orbit (LEO) satellites, constructing emerging LEO satellite networks (LSNs) that provide ubiquitous Internet service globally. However, while the massive deployment of mega-constellations can improve an LSN's survivability and performance, it also brings sustainability challenges, such as higher deployment cost, greater risk of satellite conjunctions, and more debris.

In this paper, we investigate the problem: from a network perspective, how many satellites exactly do we need to construct a survivable and performant LSN? To answer this question, we first formulate the survivable and performant LSN design (SPLD) problem, which aims to find the minimum number of satellites needed to construct an LSN that provides sufficient redundant paths, link capacity, and acceptable latency for all communication pairs it serves. Second, to solve the SPLD problem efficiently, we propose MEGAREDUCE, a requirement-driven optimization mechanism that computes feasible solutions for SPLD in polynomial time. Finally, we conduct extensive trace-driven simulations to verify MEGAREDUCE's cost-effectiveness in constructing survivable and performant LSNs on demand, and showcase how MEGAREDUCE can help optimize the incremental deployment and long-term maintenance of future LSNs.
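
The SPLD feasibility constraints can be made concrete with a toy check. The thresholds, the grid-like topology, and the per-ISL delay below are illustrative assumptions of ours, not the paper's formulation.

    import networkx as nx

    def pair_ok(G, src, dst, k_paths, latency_budget):
        """A pair is served if it has >= k node-disjoint paths (survivability)
        and a shortest-path latency within budget (performance)."""
        disjoint = len(list(nx.node_disjoint_paths(G, src, dst)))
        latency = nx.shortest_path_length(G, src, dst, weight="delay")
        return disjoint >= k_paths and latency <= latency_budget

    def lsn_feasible(G, pairs, k_paths=2, latency_budget=120.0):
        return all(pair_ok(G, s, d, k_paths, latency_budget) for s, d in pairs)

    G = nx.grid_2d_graph(6, 8, periodic=True)   # toy +Grid-like constellation
    nx.set_edge_attributes(G, 8.0, "delay")     # ~8 ms per ISL (illustrative)
    print(lsn_feasible(G, [((0, 0), (3, 4))]))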
Speaker
Yu-Tsen Tsai
Speaker biography is not available.

Accelerating Handover in Mobile Satellite Network

Jiasheng Wu, Shaojie Su, Xiong Wang, Jingjing Zhang and Yue Gao (Fudan University, China)

In recent years, the construction of large Low Earth Orbit (LEO) satellite constellations such as Starlink has spurred huge interest from both academia and industry. The 6G standard recognizes LEO satellite networks as a key component of the future 6G network due to their wide coverage. However, terminals on the ground experience frequent, long-latency handovers caused by the fast travel speed of LEO satellites, which hurts latency-sensitive applications. To address this challenge, we propose a novel handover scheme for mobile LEO satellite networks that considerably reduces handover latency. The core idea is to predict users' access satellites, avoiding direct interaction between satellites and the core network. We introduce a fine-grained transmission process to address the synchronization problem. Moreover, we reduce the computational complexity of prediction by utilizing known information, including prior computation results, satellites' access strategies, and their spatial distribution. Finally, we built a prototype mobile satellite network driven by the ephemeris of real LEO satellite constellations and conducted extensive experiments. Results demonstrate that our handover scheme reduces handover latency by 10x compared with the standard NTN handover scheme and two other existing schemes.
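
Ephemeris-driven access prediction can be sketched in a simple 2-D, single-plane model (our assumption, far simpler than the paper's scheme): pick the satellite that will stay above the elevation mask the longest, so the terminal can prepare the handover before the serving satellite sets.

    import numpy as np

    R_E, ALT = 6371.0, 550.0                       # km; a Starlink-like shell
    def elevation(sat_angle, user_angle):
        """Elevation (deg) from central-angle geometry on one orbital plane."""
        dphi = np.abs((sat_angle - user_angle + np.pi) % (2 * np.pi) - np.pi)
        r = R_E + ALT
        x = r * np.cos(dphi) - R_E                 # component toward zenith
        y = r * np.sin(dphi)                       # component along horizon
        return np.degrees(np.arctan2(x, y))

    def predict_access(sat_angles, rate, user_angle, horizon_s, mask_deg=25.0):
        """Index of the satellite visible (above mask) longest over the horizon,
        given per-satellite along-orbit angles and a common angular rate."""
        t = np.arange(0.0, horizon_s, 1.0)
        vis = [(elevation(a + rate * t, user_angle) > mask_deg).sum()
               for a in sat_angles]
        return int(np.argmax(vis))

    sats = np.linspace(0, 2 * np.pi, 22, endpoint=False)   # one plane, 22 sats
    print(predict_access(sats, rate=2 * np.pi / 5731, user_angle=0.3, horizon_s=180))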
Speaker
Speaker biography is not available.

SKYCASTLE: Taming LEO Mobility to Facilitate Seamless and Low-latency Satellite Internet Services

Jihao Li, Hewu Li, Zeqi Lai, Qian Wu and Weisen Liu (Tsinghua University, China); Xiaomo Wang (China Academy of Electronics and Information Technology, China); Yuanjie Li and Jun Liu (Tsinghua University, China); Qi Zhang (Zhongguancun Laboratory, China)

Recent satellite constellations deployed in low earth orbit (LEO) are extending the boundary of today's Internet, constructing integrated space and terrestrial networks (ISTNs) that provide Internet service pervasively, not only for residential users but also for mobile users such as airplanes. Efficiently managing global mobility and keeping connections active is critical for operators. However, our quantitative analysis shows that existing mobility management (MM) schemes inherently suffer from frequent connection interruptions and long latency. The fundamental challenge stems from a unique characteristic of ISTNs: not only are the users mobile, but the core network infrastructure (i.e., the satellites) also changes location within the network.

To facilitate seamless and low-latency Internet services, this paper presents SKYCASTLE, a novel network-based global mobility management mechanism. SKYCASTLE incorporates two key techniques to address connection interruptions caused by space-ground handovers. First, to reduce connection interruptions, SKYCASTLE adopts distributed satellite anchors that track the location changes of mobile nodes, manage handovers, and accelerate routing convergence. Second, SKYCASTLE leverages an anchor manager to place MM functionalities at satellites so as to reduce deployment costs while guaranteeing latency. Extensive evaluations combining real constellation information and popular flight trajectories demonstrate that SKYCASTLE can improve uninterrupted time by up to 55.8% and reduce latency by 47.8%.
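
Conceptually, a satellite anchor keeps the binding from a mobile node to its current access satellite, so a space-ground handover updates the anchor instead of signaling the ground core. The minimal sketch below uses data structures we assumed for illustration; SKYCASTLE's distributed design and routing convergence are not modeled.

    class SatelliteAnchor:
        def __init__(self):
            self.binding = {}                  # node id -> access satellite id

        def handover(self, node, new_sat):
            """Update the binding locally; neighbor propagation omitted here."""
            old = self.binding.get(node)
            self.binding[node] = new_sat
            return old, new_sat

        def resolve(self, node):
            return self.binding.get(node)      # next hop toward the node

    anchor = SatelliteAnchor()
    anchor.handover("flight-UA901", "sat-1423")
    assert anchor.resolve("flight-UA901") == "sat-1423"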
Speaker
Speaker biography is not available.

Resource-efficient In-orbit Detection of Earth Objects

QiYang Zhang (Beijing University of Posts & Telecommunications, China); Xin Yuan and Ruolin Xing (Beijing University of Posts and Telecommunications, China); Yiran Zhang (Beijing University of Posts and Telecommunication, China); Zimu Zheng (Huawei Technologies Co., Ltd, China); Xiao Ma and Mengwei Xu (Beijing University of Posts and Telecommunications, China); Schahram Dustdar (Vienna University of Technology, Austria); Shangguang Wang (Beijing University of Posts and Telecommunications, China)

With the rapid proliferation of large Low Earth Orbit (LEO) satellite constellations, a huge amount of in-orbit data is generated and must be transmitted to the ground for processing. However, the traditional bent-pipe architecture of LEO constellations, which downlinks raw data to the ground, is severely limited in transmission capability by scarce spectrum resources and short satellite-ground connection windows. Orbital edge computing (OEC), which exploits the computation capacities of LEO satellites to process raw data in orbit, is envisioned as a promising way to relieve the downlink burden. Yet with OEC the bottleneck shifts to the satellites' inelastic computation capacities and limited energy supply. To address both the in-orbit computation and the downlink transmission bottlenecks, we fully exploit the scarce satellite resources to compute and downlink as many of the images destined for the ground as possible. To this end, we explore satellite-ground collaboration and present TargetFuse, a satellite-ground collaborative system that combines several techniques to minimize computing error under energy and bandwidth constraints. Extensive experiments show that TargetFuse reduces computing error by 3.4× on average compared with onboard computing.
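
A toy planning sketch in the spirit of the abstract, with policy and field names assumed by us rather than taken from TargetFuse: per image tile, either run onboard detection (spends energy) or downlink raw pixels (spends bandwidth), greedily favoring the tiles where the lowest error is achievable.

    def plan(tiles, energy_budget, bw_budget):
        """tiles: list of dicts with err_onboard, err_raw, energy, bits."""
        order = sorted(range(len(tiles)),
                       key=lambda i: min(tiles[i]["err_onboard"],
                                         tiles[i]["err_raw"]))
        decisions = {}
        for i in order:                        # most promising tiles first
            t = tiles[i]
            can_onboard = t["energy"] <= energy_budget
            can_raw = t["bits"] <= bw_budget
            if can_raw and (not can_onboard or t["err_raw"] <= t["err_onboard"]):
                decisions[i] = "downlink-raw"
                bw_budget -= t["bits"]
            elif can_onboard:
                decisions[i] = "onboard"
                energy_budget -= t["energy"]
            else:
                decisions[i] = "drop"          # counts as full error for the tile
        return decisions

    tiles = [dict(err_onboard=0.10, err_raw=0.02, energy=1.0, bits=80),
             dict(err_onboard=0.05, err_raw=0.02, energy=1.0, bits=80),
             dict(err_onboard=0.30, err_raw=0.02, energy=1.0, bits=80)]
    print(plan(tiles, energy_budget=1.0, bw_budget=100))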
Speaker
Speaker biography is not available.

Session Chair

Dimitrios Koutsonikolas (Northeastern University, USA)
